
    Probabilistic Bisimulation for Parameterized Systems (Technical Report)

    Probabilistic bisimulation is a fundamental notion of process equivalence for probabilistic systems. Its applications include formalizing the anonymity property of several communication protocols. There is a large body of work on verifying probabilistic bisimulation for finite systems; far less is known for parameterized systems, where the problem is in general undecidable. In this paper we provide a generic framework for reasoning about probabilistic bisimulation for parameterized systems. Our approach is in the spirit of software verification: we encode proof rules for probabilistic bisimulation and use a decidable first-order theory to specify systems and candidate bisimulation relations, which can then be checked automatically against the proof rules. As a case study, we show that our framework is sufficiently expressive to prove the anonymity property of the parameterized dining cryptographers protocol and the parameterized grades protocol when supplied with a candidate regular bisimulation relation. Neither protocol could hitherto be verified by existing automatic methods. Moreover, with the help of standard automata learning algorithms, we show that the candidate relations can be synthesized fully automatically, making the verification fully automated.
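
    As a finite-state illustration of the condition the proof rules encode (a sketch only, not the paper's framework or notation), the following Python checks that a candidate equivalence relation is a probabilistic bisimulation on a small unlabelled Markov chain: related states must assign equal probability to every equivalence class of the relation.

```python
# Minimal sketch: checking a candidate probabilistic bisimulation on a
# finite Markov chain. The paper lifts this kind of check to parameterized
# systems via a decidable first-order theory; labels are omitted here.

def is_probabilistic_bisimulation(states, trans, relation):
    """trans[s] maps successor states to probabilities; relation is a set
    of (s, t) pairs assumed to form an equivalence relation."""
    # Compute the equivalence classes of the candidate relation.
    classes, seen = [], set()
    for s in states:
        if s in seen:
            continue
        cls = {t for t in states if (s, t) in relation or s == t}
        classes.append(cls)
        seen |= cls

    def class_prob(s, cls):
        return sum(p for succ, p in trans[s].items() if succ in cls)

    # Related states must agree on the probability of every class.
    return all(
        abs(class_prob(s, cls) - class_prob(t, cls)) < 1e-9
        for (s, t) in relation
        for cls in classes
    )

# Two coin-flip processes differing only in state names are bisimilar.
states = ["a", "b1", "b2", "c", "d1", "d2"]
trans = {
    "a": {"b1": 0.5, "b2": 0.5}, "b1": {}, "b2": {},
    "c": {"d1": 0.5, "d2": 0.5}, "d1": {}, "d2": {},
}
relation = {("a", "c"), ("c", "a"), ("b1", "d1"), ("d1", "b1"),
            ("b2", "d2"), ("d2", "b2")}
print(is_probabilistic_bisimulation(states, trans, relation))  # True
```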

    Challenges in decomposing encodings of verification problems

    Modern program verifiers use logic-based encodings of the verification problem that are discharged by a back-end reasoning engine. However, instances of such encodings for large programs can quickly overwhelm these back-end solvers. Hence, we need techniques that make the solving process scale to large systems, such as partitioning (divide-and-conquer) and abstraction. In recent work, we showed how decomposing the formula encoding of a termination analysis can significantly increase efficiency. The analysis generates a sequence of logical formulas with existentially quantified predicates that are solved by a synthesis-based program analysis engine. However, decomposition introduces abstractions beyond those required for finding the unknown predicates in the formula, and can hence deteriorate precision. We discuss the challenges associated with such decompositions and their interdependencies with the solving process.
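
    To make the shape of such encodings concrete, here is a toy guess-and-check stand-in (not the paper's synthesis engine) for a termination condition with an existentially quantified unknown, here a linear ranking function. The loop, template, and coefficient ranges are invented for illustration, and Z3 is assumed to be installed as the `z3-solver` package.

```python
# Toy instance: find coefficients c1, c0 so that r(x) = c1*x + c0 is a
# ranking function for the loop "while x > 0: x = x - 1", i.e. r is
# non-negative on entry and strictly decreases on each iteration.
from z3 import Int, ForAll, Implies, And, Not, Solver, unsat

x = Int("x")

def is_valid(formula):
    s = Solver()
    s.add(Not(formula))          # valid iff the negation is unsatisfiable
    return s.check() == unsat

for c1 in range(-2, 3):
    for c0 in range(-2, 3):
        r_pre, r_post = c1 * x + c0, c1 * (x - 1) + c0
        ok = is_valid(ForAll([x], Implies(x > 0,
                                          And(r_pre >= 0, r_post < r_pre))))
        if ok:
            print(f"ranking function found: r(x) = {c1}*x + {c0}")
```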

    Spatial Interpolants

    We propose Splinter, a new technique for proving properties of heap-manipulating programs that marries (1) a new separation logic-based analysis for heap reasoning with (2) an interpolation-based technique for refining heap-shape invariants with data invariants. Splinter is property directed, precise, and produces counterexample traces when a property does not hold. Using the novel notion of spatial interpolants modulo theories, Splinter can infer complex invariants over general recursive predicates, e.g., of the form "all elements in a linked list are even" or "a binary tree is sorted". Furthermore, we treat interpolation as a black box, which gives us the freedom to encode data manipulation in any suitable theory for a given program (e.g., bit vectors, arrays, or linear arithmetic), so that our technique immediately benefits from any future advances in SMT solving and interpolation.
    Comment: Short version published in ESOP 2015.
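
    The two example invariants from the abstract can be written out directly; the sketch below (hypothetical Node and Tree encodings, illustration only) spells out the data-refined recursive predicates that Splinter infers automatically.

```python
# Concrete versions of the abstract's example invariants: a linked-list
# predicate refined with "all elements are even", and a tree predicate
# refined with the binary-search-tree ordering invariant.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:                      # singly linked list cell
    data: int
    next: "Optional[Node]" = None

@dataclass
class Tree:                      # binary tree node
    data: int
    left: "Optional[Tree]" = None
    right: "Optional[Tree]" = None

def all_even(xs: Optional[Node]) -> bool:
    """ls(x, nil) refined with: every element is even."""
    return xs is None or (xs.data % 2 == 0 and all_even(xs.next))

def sorted_tree(t: Optional[Tree], lo=float("-inf"), hi=float("inf")) -> bool:
    """tree(t) refined with: in-order traversal is sorted."""
    return t is None or (lo < t.data < hi
                         and sorted_tree(t.left, lo, t.data)
                         and sorted_tree(t.right, t.data, hi))

print(all_even(Node(2, Node(4))))              # True
print(sorted_tree(Tree(5, Tree(3), Tree(8))))  # True
```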

    Hierarchic Superposition Revisited

    Many applications of automated deduction require reasoning in first-order logic modulo background theories, in particular some form of integer arithmetic. A major unsolved research challenge is to design theorem provers that are "reasonably complete" even in the presence of free function symbols ranging into a background theory sort. The hierarchic superposition calculus of Bachmair, Ganzinger, and Waldmann already supports such symbols but, as we demonstrate, not optimally. This paper aims to rectify the situation by introducing a novel form of clause abstraction, a core component of the hierarchic superposition calculus that transforms clauses into the form needed for internal operation. We argue for the benefits of the resulting calculus and provide two new completeness results: one for the fragment where all background-sorted terms are ground, and another for a special case of linear (integer or rational) arithmetic as a background theory.
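
    A much-simplified sketch of the basic abstraction step (the paper's refined "weak abstraction" is more discriminating than this): background-sorted subterms occurring under free function symbols are pulled out and replaced by fresh variables constrained by defining equations.

```python
# Terms are nested tuples like ("f", ("+", "1", "y")); symbols listed in
# BACKGROUND belong to the arithmetic side, everything else is free.
BACKGROUND = {"+", "-", "*", "0", "1", "2"}
_fresh = 0

def fresh_var():
    global _fresh
    _fresh += 1
    return f"X{_fresh}"

def abstract(term, equations):
    """Replace background subterms occurring below free symbols by fresh
    variables, recording their defining equations (simplified sketch)."""
    if isinstance(term, str):            # variable or constant: leave alone
        return term
    op, *args = term
    new_args = []
    for a in args:
        a = abstract(a, equations)
        head = a[0] if isinstance(a, tuple) else a
        if op not in BACKGROUND and head in BACKGROUND:
            v = fresh_var()              # pull the background term out
            equations.append((v, a))
            a = v
        new_args.append(a)
    return (op, *new_args)

eqs = []
print(abstract(("f", ("+", "1", "y")), eqs), eqs)
# ('f', 'X1') [('X1', ('+', '1', 'y'))]
```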

    Solving non-linear Horn clauses using a linear Horn clause solver

    In this paper we show that checking satisfiability of a set of non-linear Horn clauses (also called a non-linear Horn clause program) can be achieved using a solver for linear Horn clauses. We achieve this by interleaving a program transformation with a satisfiability checker for linear Horn clauses (also called a solver for linear Horn clauses). The program transformation is based on the notion of tree dimension, which we apply to a set of non-linear clauses, yielding a set whose derivation trees have bounded dimension. Such a set of clauses can be linearised. The main algorithm then proceeds by applying the linearisation transformation and the linear Horn clause solver to a sequence of sets of clauses with successively increasing dimension bounds. The approach is further developed by using a solution of clauses of lower dimension to (partially) linearise clauses of higher dimension. We constructed a prototype implementation of this approach and performed experiments on a set of verification problems; the results show some promise.
    Comment: In Proceedings HCVS 2016, arXiv:1607.0403
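
    The tree dimension in question is the Horton-Strahler number of a derivation tree; the short sketch below computes it, with trees encoded as nested lists of children (an assumed representation, not the paper's).

```python
# Tree dimension (Horton-Strahler number) of a derivation tree.

def dimension(children):
    """dim(leaf) = 0; at an internal node, take the maximum child
    dimension, adding 1 when two or more children attain it."""
    if not children:
        return 0
    dims = sorted((dimension(c) for c in children), reverse=True)
    if len(dims) >= 2 and dims[0] == dims[1]:
        return dims[0] + 1
    return dims[0]

leaf = []
print(dimension([leaf, leaf]))                  # 1: two maximal children tie
print(dimension([[leaf, leaf], leaf]))          # 1: unique maximal child
print(dimension([[leaf, leaf], [leaf, leaf]]))  # 2
```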

    Quantifier-Free Interpolation of a Theory of Arrays

    The use of interpolants in model checking is becoming an enabling technology for fast and robust verification of hardware and software. The application of encodings based on the theory of arrays, however, is limited by the impossibility of deriving quantifier-free interpolants in general. In this paper, we show that it is possible to obtain quantifier-free interpolants for a Skolemized version of the extensional theory of arrays. We prove this in two ways: (1) non-constructively, using the model-theoretic notion of amalgamation, which is known to be equivalent to quantifier-free interpolation for universal theories; and (2) constructively, by designing an interpolating procedure based on solving equations between array updates. (Interestingly, rewriting techniques are used in the key steps of the solver and its proof of correctness.) To the best of our knowledge, this is the first successful attempt at computing quantifier-free interpolants for a variant of the theory of arrays with extensionality.
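
    This is not the paper's interpolation procedure, but the defining properties of a quantifier-free interpolant in the theory of arrays can be checked concretely with Z3 (assumed installed as the `z3-solver` package): for an unsatisfiable A and B, the interpolant uses only shared symbols, is implied by A, and is inconsistent with B.

```python
# Checking a candidate quantifier-free interpolant for a pair of
# array-theory formulas with Z3.
from z3 import (Array, Ints, IntSort, Store, Select, Implies, And, Not,
                Solver, unsat)

a = Array("a", IntSort(), IntSort())
b = Array("b", IntSort(), IntSort())
i, x = Ints("i x")

A = b == Store(a, i, x)       # b is a updated at index i with value x
B = Select(b, i) != x         # reading b at i does not give x
I = Select(b, i) == x         # candidate interpolant over shared b, i, x

def is_valid(f):
    s = Solver()
    s.add(Not(f))
    return s.check() == unsat

def is_unsat(f):
    s = Solver()
    s.add(f)
    return s.check() == unsat

print(is_valid(Implies(A, I)))  # True: A entails I
print(is_unsat(And(I, B)))      # True: I contradicts B
```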

    GRUNGE: A Grand Unified ATP Challenge

    This paper describes a large set of related theorem proving problems obtained by translating theorems from the HOL4 standard library into multiple logical formalisms. The formalisms are higher-order logic (with and without type variables) and first-order logic (possibly with multiple types, and possibly with type variables). The resulting problem sets allow us to run automated theorem provers that support different logical formats on corresponding problems and compare their performance. This also yields a new "grand unified" large-theory benchmark that emulates the ITP/ATP hammer setting, where systems and metasystems can use multiple ATP formalisms in complementary ways and jointly learn from the accumulated knowledge.
    Comment: CADE 27, the 27th International Conference on Automated Deduction.
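
    As a rough illustration of the multi-format rendering involved (an invented example, not the paper's translation pipeline), the same conjecture can be written in first-order TPTP (FOF) and higher-order TPTP (THF) syntax:

```python
# One conjecture, two TPTP renderings: FOF for first-order provers,
# THF (with an explicit type declaration) for higher-order provers.
fof = "fof(antisym, conjecture, ![X,Y]: ((le(X,Y) & le(Y,X)) => X = Y))."

thf = "\n".join([
    "thf(le_type, type, le: $i > $i > $o).",
    "thf(antisym, conjecture,",
    "    ![X: $i, Y: $i]: (((le @ X @ Y) & (le @ Y @ X)) => (X = Y))).",
])

print(fof)
print(thf)
```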

    Interpolant tree automata and their application in Horn clause verification

    This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been applied separately before, but are combined in a new way in this paper. The role of an interpolant tree automaton is to generalise a spurious counterexample found during refinement, capturing a possibly infinite set of spurious counterexample traces. In our approach these traces are then eliminated by a transformation of the Horn clauses. We compare this approach with two other methods: one uses interpolant tree automata in an algorithm for trace abstraction and refinement, while the other uses abstract interpretation over the domain of convex polyhedra without the generalisation step. Experiments on a number of Horn clause verification problems indicate that combining interpolant tree automata with abstract interpretation increases the power of the verification tool, while sometimes incurring a performance overhead.
    Comment: In Proceedings VPT 2016, arXiv:1607.0183
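
    An interpolant tree automaton recognises a set of derivation trees; the sketch below (an illustrative encoding, not the paper's construction) runs a standard bottom-up tree automaton over a tree of clause applications, which is the membership question underlying trace elimination.

```python
# Bottom-up tree automaton acceptance: each node is (symbol, children);
# delta maps (symbol, tuple-of-child-states) to a state.

def run(tree, delta):
    symbol, children = tree
    child_states = tuple(run(c, delta) for c in children)
    return delta.get((symbol, child_states))  # None = no transition

def accepts(tree, delta, final_states):
    return run(tree, delta) in final_states

# Automaton accepting trees with clause c1 at the root and two c0 leaves
# below it (hypothetical clause names for a spurious trace shape).
delta = {
    ("c0", ()): "q0",
    ("c1", ("q0", "q0")): "q1",
}
tree = ("c1", [("c0", []), ("c0", [])])
print(accepts(tree, delta, {"q1"}))  # True
```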

    Using Relational Verification for Program Slicing

    Program slicing is the process of removing statements from a program such that defined aspects of its behavior are retained. To produce precise slices, i.e., slices that are minimal in size, the program's semantics must be considered. Existing approaches that go beyond a syntactical analysis and take the semantics into account are not fully automatic and require auxiliary specifications from the user. In this paper, we adapt relational verification to check whether a slice candidate obtained by removing some instructions from a program is indeed a valid slice. Based on this, we propose a framework for precise and automatic program slicing. As part of this framework, we present three strategies for the generation of slice candidates, and we show how dynamic slicing approaches, which interweave generating and checking slice candidates, can be used for this purpose. The framework can easily be extended with other strategies for generating slice candidates. We discuss the strengths and weaknesses of slicing approaches that use our framework.
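
    The core check the framework relies on can be illustrated with a toy relational encoding (not the paper's tool or calculus): a slice candidate is valid if, for all inputs, it agrees with the original program on the variables of interest. Below, Z3 (assumed installed as the `z3-solver` package) verifies this for a two-statement program and a candidate slice for y.

```python
# Original program: "y = x + 1; z = 2 * x".  Candidate slice for y:
# "y = x + 1".  The slice is valid iff no input makes the final y values
# differ, i.e. the disagreement constraint is unsatisfiable.
from z3 import Int, Solver, unsat

x = Int("x")
y_orig, z_orig = Int("y_orig"), Int("z_orig")
y_slice = Int("y_slice")

s = Solver()
s.add(y_orig == x + 1, z_orig == 2 * x)  # SSA encoding of the original
s.add(y_slice == x + 1)                  # SSA encoding of the candidate
s.add(y_orig != y_slice)                 # look for a disagreement on y
print("valid slice" if s.check() == unsat else "not a slice")
```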